A Appendix

Neural Information Processing Systems

A.1 On the ES fairness notion: defines the ES fairness notion for a classifier R = r(X, A). A.4 Restating Theorem 5 for the statistical parity (SP) fairness notion: the statement is restated for statistical parity, with a proof similar to that of Theorem 5. A.7 Numerical Experiment: the EO and ES fairness notions are compared in Table 2 after adding the corresponding constraints to (13).
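For context on the notions named in this fragment, statistical parity and equalized odds can both be checked empirically from predictions, labels, and a binary sensitive attribute. Below is a minimal numpy sketch of these two standard metrics; the function names and toy data are illustrative and not taken from the paper, and the paper's ES notion is not reproduced here since its definition does not survive in the fragment.

```python
import numpy as np

def statistical_parity_gap(y_pred, a):
    """|Pr{y_hat = 1 | A = 0} - Pr{y_hat = 1 | A = 1}| (standard SP gap)."""
    return abs(y_pred[a == 0].mean() - y_pred[a == 1].mean())

def equalized_odds_gap(y_pred, y_true, a):
    """Largest gap across y in {0, 1} of Pr{y_hat = 1 | Y = y, A = a}
    between the two groups (standard EO gap)."""
    gaps = []
    for y in (0, 1):
        rate0 = y_pred[(y_true == y) & (a == 0)].mean()
        rate1 = y_pred[(y_true == y) & (a == 1)].mean()
        gaps.append(abs(rate0 - rate1))
    return max(gaps)

# Toy data: random binary predictions, labels, and sensitive attribute.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
print(statistical_parity_gap(y_pred, a), equalized_odds_gap(y_pred, y_true, a))
```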



Improved Scalable Lipschitz Bounds for Deep Neural Networks

Usman Syed, Bin Hu

arXiv.org Machine Learning

Computing tight Lipschitz bounds for deep neural networks is crucial for analyzing their robustness and stability, but existing approaches either produce relatively conservative estimates or rely on semidefinite programming (SDP) formulations (namely the LipSDP condition) that face scalability issues. Building upon ECLipsE-Fast, the state-of-the-art Lipschitz bound method that avoids SDP formulations, we derive a new family of improved scalable Lipschitz bounds that can be combined to outperform ECLipsE-Fast. Specifically, we leverage more general parameterizations of feasible points of LipSDP to derive various closed-form Lipschitz bounds, avoiding the use of SDP solvers. In addition, we show that our technique encompasses ECLipsE-Fast as a special case and leads to a much larger class of scalable Lipschitz bounds for deep neural networks. Our empirical study shows that our bounds improve upon ECLipsE-Fast, further advancing the scalability and precision of Lipschitz estimation for large neural networks.
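For a sense of what these tighter bounds improve upon: the crudest scalable Lipschitz estimate for a feedforward network with 1-Lipschitz activations (e.g. ReLU) is the product of the layers' spectral norms. The sketch below shows that classical baseline in numpy; it is illustrative only and is not the LipSDP or ECLipsE-Fast computation, and `naive_lipschitz_bound` is a hypothetical helper name.

```python
import numpy as np

def naive_lipschitz_bound(weights):
    """Upper-bound the Lipschitz constant of x -> W_L s(... s(W_1 x) ...)
    by the product of spectral norms, valid whenever the activation s is
    1-Lipschitz (e.g. ReLU). Typically much looser than LipSDP-based
    bounds, but costs only one SVD per layer."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # spectral norm = largest singular value
    return bound

# Toy 3-layer network: input dim 32, hidden dim 64, scalar output.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 32)),
          rng.standard_normal((64, 64)),
          rng.standard_normal((1, 64))]
print(naive_lipschitz_bound(layers))
```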


Efficient Online Linear Optimization with Approximation Algorithms

Dan Garber

Neural Information Processing Systems

We revisit the problem of online linear optimization in the case where the set of feasible actions is accessible through an approximated linear optimization oracle with a factor α multiplicative approximation guarantee. This setting is particularly interesting since it captures natural online extensions of well-studied offline linear optimization problems which are NP-hard, yet admit efficient approximation algorithms. The goal here is to minimize the α-regret, which is the natural extension of the standard regret in online learning to this setting. We present new algorithms with significantly improved oracle complexity for both the full-information and bandit variants of the problem.
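To make the α-regret concrete: for a linear minimization problem with an α ≥ 1 approximation factor, it compares the learner's cumulative loss to α times the loss of the best fixed feasible action in hindsight. The sketch below computes it under those assumptions; the discrete feasible set and the `alpha_regret` helper are toy stand-ins for illustration, not the paper's algorithm or oracle.

```python
import numpy as np

def alpha_regret(losses, actions, feasible_set, alpha):
    """alpha-regret for linear minimization:
    sum_t <c_t, x_t> - alpha * min_x sum_t <c_t, x>.
    losses: (T, d) array of loss vectors c_t; actions: (T, d) plays x_t."""
    learner_loss = np.sum(losses * actions)
    # Best fixed feasible action in hindsight, via the aggregate loss vector.
    c_total = losses.sum(axis=0)
    best_loss = min(float(c_total @ x) for x in feasible_set)
    return learner_loss - alpha * best_loss

# Toy example: feasible actions are three standard basis vectors.
rng = np.random.default_rng(0)
feasible = [np.array(v, dtype=float) for v in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
losses = rng.random((100, 3))                              # linear losses c_t
actions = np.stack([feasible[rng.integers(3)] for _ in range(100)])
print(alpha_regret(losses, actions, feasible, alpha=1.5))
```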